Fast dropout training
Authors
Abstract
Preventing feature co-adaptation by encouraging independent contributions from different features often improves classification and regression performance. Dropout training (Hinton et al., 2012) does this by randomly dropping out (zeroing) hidden units and input features during training of neural networks. However, repeatedly sampling a random subset of input features makes training much slower. Based on an examination of the implied objective function of dropout training, we show how to do fast dropout training by sampling from or integrating a Gaussian approximation, instead of doing Monte Carlo optimization of this objective. This approximation, justified by the central limit theorem and empirical evidence, gives an order of magnitude speedup and more stability. We show how to do fast dropout training for classification, regression, and multilayer neural networks. Beyond dropout, our technique is extended to integrate out other types of noise and small image transformations.
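To make the central idea concrete, here is a minimal sketch (not the authors' code; variable names and the specific setup are illustrative) of the Gaussian approximation for a single linear unit. Under dropout with keep probability p, the pre-activation is a sum of many independent terms, so by the central limit theorem it is approximately Gaussian with a mean and variance that can be computed in closed form, avoiding Monte Carlo sampling of dropout masks:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_preactivation(w, x, p, n_samples=200_000):
    """Monte Carlo baseline: sample Bernoulli keep-masks and
    compute the dropped-out pre-activation w . (m * x)."""
    masks = rng.random((n_samples, x.size)) < p  # keep each input with prob p
    return (masks * x) @ w  # shape (n_samples,)

def gaussian_dropout_moments(w, x, p):
    """Fast dropout: the same pre-activation is approximately
    Gaussian (by the CLT) with these exact moments."""
    mu = p * np.dot(w, x)
    var = p * (1 - p) * np.sum((w * x) ** 2)
    return mu, var
```

Downstream quantities such as E[sigma(w . (m * x))] can then be estimated by sampling from (or analytically integrating against) this one-dimensional Gaussian instead of re-sampling high-dimensional dropout masks, which is the source of the reported speedup.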
Similar papers
Fast Learning with Noise in Deep Neural Nets
Dropout has been proposed as an effective and simple trick [1] to combat overfitting in deep neural nets. The idea is to randomly mask out input and internal units during training. Despite its usefulness, there has been very little, and only scattered, understanding of injecting noise into the internal units of deep learning architectures. In this paper, we study the effect of dropout on both input and hidden l...
To Drop or Not to Drop: Robustness, Consistency and Differential Privacy Properties of Dropout
Training deep belief networks (DBNs) requires optimizing a non-convex function with an extremely large number of parameters. Naturally, existing gradient descent (GD) based methods are prone to arbitrarily poor local minima. In this paper, we rigorously show that such local minima can be avoided (upto an approximation error) by using the dropout technique, a widely used heuristic in this domain...
On Fast Dropout and its Applicability to Recurrent Networks
Recurrent Neural Networks (RNNs) are rich models for the processing of sequential data. Recent work on advancing the state of the art has focused on the optimization or modelling of RNNs, mostly motivated by addressing the problems of vanishing and exploding gradients. The control of overfitting has seen considerably less attention. This paper contributes to that by analyzing fast dropo...
Fast “dropout” training for logistic regression
Recently, improved classification performance has been achieved by encouraging independent contributions from input features, or equivalently, by preventing feature co-adaptation. In particular, the method proposed in [1], informally called “dropout”, did this by randomly dropping out (zeroing) hidden units and input features for training neural networks. However, sampling a random subset of in...
Regularizing Recurrent Networks - On Injected Noise and Norm-based Methods
Advancements in parallel processing have led to a surge in multilayer perceptron (MLP) applications and deep learning in the past decades. Recurrent Neural Networks (RNNs) give additional representational power to feedforward MLPs by providing a way to treat sequential data. However, RNNs are hard to train using conventional error backpropagation methods because of the difficulty in relating...